
    Mean first-passage time for maximal-entropy random walks in complex networks

    We perform an in-depth study of the mean first-passage time (MFPT)---a primary quantity for random walks with numerous applications---for maximal-entropy random walks (MERW) performed in complex networks. For MERW in a general network, we derive an explicit expression for the MFPT in terms of the eigenvalues and eigenvectors of the adjacency matrix associated with the network. For MERW in uncorrelated networks, we also provide a theoretical formula for the MFPT at the mean-field level, based on which we further evaluate the dominant scalings of the MFPT to different targets for MERW in uncorrelated scale-free networks and compare the results with those for traditional unbiased random walks (TURW). We show that the MFPT to a hub node is much lower for MERW than for TURW. However, when the destination is a node with the least degree or a uniformly chosen node, the MFPT is higher for MERW than for TURW. Since the MFPT to a uniformly chosen node measures the real efficiency of search in networks, our work provides insight into general search processes in complex networks. Comment: Definitive version accepted for publication in Scientific Reports.
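    The closed-form spectral expression is in the paper itself; as a hedged illustration of the objects involved, the sketch below builds the MERW transition matrix from the leading eigenvector of the adjacency matrix and computes MFPTs numerically via the standard absorbing-chain linear system (the small graph and the TURW comparison are illustrative only).

```python
import numpy as np

def merw_transition_matrix(A):
    """Maximal-entropy random walk: P_ij = A_ij * psi_j / (lam * psi_i), with
    lam, psi the leading eigenvalue/eigenvector (Perron vector) of A."""
    eigvals, eigvecs = np.linalg.eigh(A)
    lam, psi = eigvals[-1], np.abs(eigvecs[:, -1])
    return A * psi[None, :] / (lam * psi[:, None])

def mfpt_to_target(P, t):
    """MFPT to node t from every source: solve (I - Q) T = 1, where Q is P with
    the target's row and column removed (standard absorbing-chain identity)."""
    keep = [i for i in range(P.shape[0]) if i != t]
    Q = P[np.ix_(keep, keep)]
    T = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    out = np.zeros(P.shape[0])
    out[keep] = T
    return out

# illustrative 4-node graph; node 0 plays the role of the hub
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
P_merw = merw_transition_matrix(A)
P_turw = A / A.sum(axis=1, keepdims=True)   # traditional unbiased random walk
print("MFPT to hub, MERW:", mfpt_to_target(P_merw, 0))
print("MFPT to hub, TURW:", mfpt_to_target(P_turw, 0))
```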

    Random walks in weighted networks with a perfect trap: An application of Laplacian spectra

    In this paper, we propose a general framework for the trapping problem on a weighted network with a perfect trap fixed at an arbitrary node. Using spectral graph theory, we provide an exact formula for the mean first-passage time (MFPT) from one node to another, based on which we deduce an explicit expression for the average trapping time (ATT) in terms of the eigenvalues and eigenvectors of the Laplacian matrix associated with the weighted graph, where the ATT is the average of the MFPTs to the trap over all source nodes. We then derive a sharp lower bound for the ATT in terms of only the local information of the trap node, a bound that is attained in some graphs. Moreover, we deduce the ATT when the trap is distributed uniformly over the whole network. Our results show that network weights play a significant role in the trapping process. To apply our framework, we use the obtained formulas to study random walks on two specific networks: trapping in weighted uncorrelated networks with a deep trap, whose weights are characterized by a parameter, and Lévy random walks in a connected binary network with a uniformly distributed trap, which can be viewed as random walks on a weighted network. For weighted uncorrelated networks we show that the ATT to any target node depends on the weight parameter; that is, the ATT to any node can change drastically by modifying the parameter, in contrast to trapping in binary networks. For Lévy random walks in any connected network, by using their equivalence to random walks on a weighted complete network, we obtain the optimal exponent characterizing the Lévy random walks that minimize the average of the ATTs taken over all target nodes. Comment: Definitive version accepted for publication in Physical Review.
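    As a rough numerical companion to the framework (not the paper's spectral formula), the sketch below computes trapping times for a random walk on a weighted graph with transition probabilities proportional to edge weights, and averages them into an ATT; the particular one-parameter weight form is an illustrative assumption, not the paper's.

```python
import numpy as np

def trapping_times(W, trap):
    """MFPT to a perfect trap at node `trap` for a random walk on the weighted
    graph W (P_ij = w_ij / s_i), via the absorbing-chain system (I - Q) T = 1."""
    P = W / W.sum(axis=1, keepdims=True)
    keep = [i for i in range(W.shape[0]) if i != trap]
    Q = P[np.ix_(keep, keep)]
    return np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))

def average_trapping_time(W, trap):
    """ATT: MFPT to the trap averaged over all non-trap source nodes."""
    return trapping_times(W, trap).mean()

# weights w_ij = (k_i * k_j)**theta placed on a small binary graph -- one simple
# way to emulate a one-parameter weighted network (theta is illustrative only)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
k = A.sum(axis=1)
theta = -0.5
W = A * (np.outer(k, k) ** theta)
print("ATT with trap at node 1:", average_trapping_time(W, trap=1))
```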

    Time-to-Event Model-Assisted Designs to Accelerate Phase I Clinical Trials

    Two useful strategies to speed up drug development are to increase the patient accrual rate and to use novel adaptive designs. Unfortunately, these two strategies often conflict when the evaluation of the outcome cannot keep pace with the patient accrual rate, so that the interim data cannot be observed in time to make adaptive decisions. A similar logistical difficulty arises when the outcome is of late onset. Based on a novel formulation and approximation of the likelihood of the observed data, we propose a general methodology for model-assisted designs that handles toxicity data that are pending due to fast accrual or late-onset toxicity, and facilitates seamless decision making in phase I dose-finding trials. The dose escalation/de-escalation rules of the proposed time-to-event model-assisted designs can be tabulated before the trial begins, which greatly simplifies trial conduct in practice compared to existing methods. We show that the proposed designs have desirable finite- and large-sample properties and yield performance that is superior to that of more complicated model-based designs. We provide user-friendly software for implementing the designs. Comment: 31 pages.
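    For flavor only: the sketch below tabulates BOIN-style escalation/de-escalation boundaries and makes a dose decision while some toxicity outcomes are still pending, crediting pending patients with their fraction of completed follow-up. This is a deliberately crude stand-in for the paper's likelihood approximation; the specific boundaries, target rate, and imputation rule are assumptions, not the proposed design.

```python
import math

def boin_boundaries(phi, phi1=None, phi2=None):
    """Standard BOIN escalation/de-escalation boundaries for target DLT rate phi
    (phi1, phi2 default to 0.6*phi and 1.4*phi)."""
    phi1 = 0.6 * phi if phi1 is None else phi1
    phi2 = 1.4 * phi if phi2 is None else phi2
    lam_e = math.log((1 - phi1) / (1 - phi)) / math.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
    lam_d = math.log((1 - phi) / (1 - phi2)) / math.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
    return lam_e, lam_d

def tite_decision(n_dlt, n_complete, pending_followup_fracs, phi=0.3):
    """Dose decision with pending toxicity data: pending patients contribute their
    fraction of the assessment window as partial (DLT-free) follow-up.  This is
    only a caricature of the paper's approximation of the observed-data likelihood."""
    lam_e, lam_d = boin_boundaries(phi)
    n_eff = n_complete + sum(pending_followup_fracs)   # effective sample size
    p_hat = n_dlt / n_eff if n_eff > 0 else 0.0
    if p_hat <= lam_e:
        return "escalate"
    if p_hat >= lam_d:
        return "de-escalate"
    return "stay"

# 1 DLT among 4 fully evaluated patients, 2 patients halfway through follow-up
print(boin_boundaries(0.3))                      # roughly (0.236, 0.358)
print(tite_decision(1, 4, [0.5, 0.5], phi=0.3))
```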

    Integrating Inter-vehicular Communication, Vehicle Localization, and a Digital Map for Cooperative Adaptive Cruise Control with Target Detection Loss

    Adaptive Cruise Control (ACC) is an Advanced Driver Assistance System (ADAS) that enables vehicle following with desired inter-vehicular distances. Cooperative Adaptive Cruise Control (CACC) is an upgraded form of ACC that uses additional inter-vehicular wireless communication to share vehicle states, such as acceleration, and thereby enables following at shorter gaps. Both ACC and CACC rely on range sensors such as radar to obtain the actual inter-vehicular distance for gap-keeping control. The range sensor may lose detection of the target (the preceding vehicle) on curvy roads or steep hills because of its limited angle of view. Unfavourable weather conditions, target selection failures, or hardware issues may also result in target detection loss. During target detection loss, the vehicle-following system usually falls back to Cruise Control (CC), in which the follower vehicle maintains a constant speed. In this work, we propose an alternative way to obtain the inter-vehicular distance during target detection loss so that vehicle following can continue. The proposed algorithm integrates inter-vehicular communication, accurate vehicle localization, and a digital map with lane-center information to approximate the inter-vehicular distance. In-lab robot-following experiments demonstrated that the proposed algorithm provides a desirable approximation of the inter-vehicular distance. Although the algorithm is intended for vehicle-following applications, it can also be used in other scenarios that require approximating the relative distance between vehicles. The work also showcases our in-lab development of robotic emulation of traffic for connected and automated vehicles.
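    A geometric sketch of the map-based gap approximation described above: project the follower's and leader's localized positions onto the lane-center polyline from the digital map and take the difference of their arc-length stations. The leader position arriving over V2V, the polyline format, and the assumed vehicle length are all illustrative assumptions.

```python
import numpy as np

def arclength_station(centerline, point):
    """Project `point` onto the polyline `centerline` (Nx2 array of lane-center
    coordinates from the digital map) and return its arc-length station."""
    best_s, best_d, s_acc = 0.0, np.inf, 0.0
    for p0, p1 in zip(centerline[:-1], centerline[1:]):
        seg = p1 - p0
        L = np.linalg.norm(seg)
        t = np.clip(np.dot(point - p0, seg) / (L * L), 0.0, 1.0)
        d = np.linalg.norm(point - (p0 + t * seg))
        if d < best_d:
            best_d, best_s = d, s_acc + t * L
        s_acc += L
    return best_s

def approx_gap(centerline, follower_xy, leader_xy, leader_length=4.5):
    """Approximate bumper-to-bumper gap as the difference of arc-length stations,
    minus an assumed leader vehicle length (leader_xy is shared over V2V)."""
    return (arclength_station(centerline, np.asarray(leader_xy))
            - arclength_station(centerline, np.asarray(follower_xy))
            - leader_length)

# toy curved lane centerline and two vehicle positions
s = np.linspace(0, 50, 200)
centerline = np.column_stack([s, 5.0 * np.sin(s / 15.0)])
print(approx_gap(centerline, follower_xy=(5.0, 1.7), leader_xy=(25.0, 5.0)))
```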

    Learning from Synthetic Data for Crowd Counting in the Wild

    Recently, counting the number of people in crowd scenes has become a hot topic because of its widespread applications (e.g., video surveillance and public security). It is a difficult task in the wild: changeable environments and widely varying crowd sizes prevent current methods from working well. In addition, because of scarce data, many methods suffer from over-fitting to varying degrees. To remedy these two problems, we first develop a data collector and labeler that can generate synthetic crowd scenes and annotate them automatically, without any manual labeling. Based on it, we build a large-scale, diverse synthetic dataset. Second, we propose two schemes that exploit the synthetic data to boost the performance of crowd counting in the wild: 1) pretrain a crowd counter on the synthetic data and then finetune it on real data, which significantly improves the model's performance on real data; 2) a crowd-counting method based on domain adaptation, which frees humans from heavy data annotation. Extensive experiments show that the first method achieves state-of-the-art performance on four real datasets, and the second outperforms our baselines. The dataset and source code are available at https://gjy3035.github.io/GCC-CL/. Comment: Accepted by CVPR 2019.
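    A minimal sketch of scheme 1) above (pretrain on synthetic, finetune on real), with a toy density-map regressor and random tensors standing in for the synthetic and real datasets; the paper's actual counter and training setup are of course far larger.

```python
import torch
import torch.nn as nn

def make_counter():
    """Deliberately tiny density-map regressor; the predicted map sums to the count."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),
    )

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for img, density in loader:
            opt.zero_grad()
            loss = loss_fn(model(img), density)
            loss.backward()
            opt.step()
    return model

# random tensors stand in for the synthetic scenes and the (scarcer) real scenes
fake = lambda n: [(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64)) for _ in range(n)]
model = make_counter()
train(model, fake(10), epochs=2, lr=1e-3)   # 1) pretrain on synthetic data
train(model, fake(2),  epochs=2, lr=1e-4)   # 2) finetune on real data
```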

    How to reach the orbital configuration of the inner three planets in HD 40307 Planet System?

    The formation of the present configuration of three hot super-Earths in the planetary system HD 40307 is a challenge for dynamical astronomers. With the two successive period ratios both near and slightly larger than 2, the system may have evolved from pairwise 2:1 mean motion resonances (MMRs). In this paper, we investigate the evolution of the period ratios of the three planets after the primordial gas disk was depleted. Three routes are found that can plausibly produce the current configuration under tidal dissipation with the central star: (i) through apsidal alignment only; (ii) out of pairwise 2:1 MMRs, then through apsidal alignment; (iii) out of the 4:2:1 Laplace Resonance (LR), then through apsidal alignment. All three routes require initial planetary eccentricities of $\sim 0.15$, which implies a history of planetary scattering during and after the depletion of the gas disk. All three routes pass through the apsidal-alignment phase and finally enter a state with near-zero eccentricities. We also find characteristics specific to each route. If the system initially went through pairwise 2:1 MMRs, the MMR of the outer two planets would be broken first to reach the current state. As for route (iii), the planets would leave the Laplace Resonance at the locations of certain high-order resonances. At the high-order resonances 17:8 or 32:15 of planets c and d, the system can enter the current state as the final equilibrium. Comment: 16 pages, 8 figures, accepted by SCIENCE CHINA Physics, Mechanics & Astronomy.
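    For reference, the routes above are distinguished by which of the standard resonant and secular angles librate; below is a brief recap of those textbook definitions (not of the paper's simulations), for an inner/outer pair near the 2:1 commensurability.

```latex
\begin{align}
  \theta_1 &= 2\lambda_{\rm out} - \lambda_{\rm in} - \varpi_{\rm in}, \\
  \theta_2 &= 2\lambda_{\rm out} - \lambda_{\rm in} - \varpi_{\rm out}, \\
  \Delta\varpi &= \varpi_{\rm in} - \varpi_{\rm out}.
\end{align}
% Libration of \theta_1, \theta_2 signals the 2:1 MMR; libration of \Delta\varpi
% signals apsidal alignment; the 4:2:1 Laplace relation additionally involves
% \phi_{\rm L} = \lambda_1 - 3\lambda_2 + 2\lambda_3.
```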

    Numerical Renormalization Group Calculations For Similarity Solutions and Travelling Waves

    We present a numerical implementation of the renormalization group (RG) for partial differential equations, constructing similarity solutions and travelling waves. We show that for a large class of well-localized initial conditions, successive iterations of an appropriately defined discrete RG transformation in space and time will drive the system towards a fixed point. This corresponds to a scale-invariant solution, such as a similarity or travelling-wave solution, which governs the long-time asymptotic behavior. We demonstrate that the numerical RG method is computationally very efficient. Comment: 14 pages, Postscript file of paper and 3 figures distributed as self-unpacking uuencoded compressed tar file. Also available by anonymous ftp to gijoe.mrl.uiuc.edu (128.174.119.153), file /pub/numrg.u
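    A minimal sketch of such a discrete RG iteration for the linear diffusion equation $u_t = u_{xx}$ (chosen here for illustration; the paper treats a broader class of PDEs): evolve well-localized data from $t=1$ to $t=b^2$, rescale space and amplitude by $b$, and iterate; the Gaussian similarity profile is the fixed point of this map.

```python
import numpy as np

def evolve_heat(u, x, t0, t1, dt=1e-3):
    """Explicit finite-difference integration of u_t = u_xx from t0 to t1."""
    dx = x[1] - x[0]
    for _ in range(int(round((t1 - t0) / dt))):
        u = u + dt * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u

def rg_step(u, x, b=1.2):
    """One discrete RG step for the diffusion equation: evolve from t = 1 to
    t = b^2, then rescale (R u)(x) = b * u(b x); the Gaussian similarity
    profile is a fixed point of this map."""
    u = evolve_heat(u, x, 1.0, b**2)
    return b * np.interp(b * x, x, u, left=0.0, right=0.0)

x = np.linspace(-20.0, 20.0, 801)
u = np.where(np.abs(x) <= 1.0, 0.5, 0.0)            # localized data, roughly unit mass
for _ in range(15):                                 # iterate the RG map towards its fixed point
    u = rg_step(u, x)
gaussian = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)   # similarity solution at t = 1
print("deviation from fixed point:", np.max(np.abs(u - gaussian)))  # should be small
```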

    A symmetric 2-tensor canonically associated to Q-curvature and its applications

    In this article, we define a symmetric 2-tensor canonically associated to Q-curvature, called the J-tensor, on any Riemannian manifold of dimension at least three. The relation between the J-tensor and Q-curvature is precisely analogous to that between the Ricci tensor and scalar curvature; thus the J-tensor can be interpreted as a higher-order analogue of the Ricci tensor. This tensor can also be used to understand Chang-Gursky-Yang's theorem on 4-dimensional Q-singular metrics. Moreover, we show that an Almost-Schur Lemma holds for Q-curvature, which gives an estimate of Q-curvature on closed manifolds. Comment: 14 pages, new remarks, references and acknowledgement added in the introduction.
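    For orientation, these are the classical identities for the Ricci tensor that the J-tensor is said to mirror, together with the De Lellis-Topping almost-Schur inequality for scalar curvature that the Q-curvature statement parallels (standard facts, not the paper's new results):

```latex
% Trace and contracted-Bianchi identities for the Ricci tensor (n >= 3):
\begin{equation}
  \operatorname{tr}_g \operatorname{Ric} = R, \qquad
  \operatorname{div}_g \operatorname{Ric} = \tfrac{1}{2}\, dR ,
\end{equation}
% and the De Lellis--Topping almost-Schur inequality: on a closed manifold
% (M^n, g) with \operatorname{Ric} \ge 0,
\begin{equation}
  \int_M \bigl(R - \overline{R}\bigr)^2 \, dV_g
  \;\le\; \frac{4n(n-1)}{(n-2)^2} \int_M \Bigl|\operatorname{Ric} - \tfrac{R}{n}\, g \Bigr|^2 \, dV_g ,
\end{equation}
% where \overline{R} is the average of R over M.  The paper's J-tensor and
% Q-curvature are asserted to satisfy relations of exactly this shape.
```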

    Learning the mapping $\mathbf{x}\mapsto \sum_{i=1}^d x_i^2$: the cost of finding the needle in a haystack

    The task of using machine learning to approximate the mapping $\mathbf{x}\mapsto\sum_{i=1}^d x_i^2$ with $x_i\in[-1,1]$ seems to be a trivial one. Given knowledge of the separable structure of the function, one can design a sparse network to represent the function very accurately, or even exactly. When such structural information is not available, and we may only use a dense neural network, the optimization procedure to find the sparse network embedded in the dense network is similar to finding a needle in a haystack, using a given number of samples of the function. We demonstrate that the cost (measured by sample complexity) of finding the needle is directly related to the Barron norm of the function. While only a small number of samples is needed to train a sparse network, the dense network trained with the same number of samples exhibits a large test loss and a large generalization gap. In order to control the size of the generalization gap, we find that the use of explicit regularization becomes increasingly important as $d$ increases. The numerically observed sample complexity with explicit regularization scales as $\mathcal{O}(d^{2.5})$, which is in fact better than the theoretically predicted sample complexity that scales as $\mathcal{O}(d^{4})$. Without explicit regularization (i.e., relying only on implicit regularization), the numerically observed sample complexity is significantly higher and close to $\mathcal{O}(d^{4.5})$.
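    A hedged sketch of the kind of experiment described: fit a dense two-layer ReLU network to $\sum_i x_i^2$ on $[-1,1]^d$ and compare train and test loss with and without explicit regularization. Plain weight decay is used here only as a stand-in (the paper's regularizer is tied to the Barron norm), and the architecture, sizes, and optimizer below are assumptions.

```python
import torch
import torch.nn as nn

def run(d=10, n_train=2000, width=512, weight_decay=0.0, steps=3000):
    """Fit f(x) = sum_i x_i^2 on [-1,1]^d with a dense two-layer net and report
    train/test losses; `weight_decay` stands in for explicit regularization."""
    torch.manual_seed(0)
    X = torch.rand(n_train, d) * 2 - 1
    y = (X ** 2).sum(dim=1, keepdim=True)
    Xt = torch.rand(5000, d) * 2 - 1                 # held-out test samples
    yt = (Xt ** 2).sum(dim=1, keepdim=True)
    net = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=weight_decay)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(X) - y) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        test = ((net(Xt) - yt) ** 2).mean()
    return loss.item(), test.item()                  # (train loss, test loss)

print("no explicit regularization:", run(weight_decay=0.0))
print("with weight decay:         ", run(weight_decay=1e-4))
```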

    Estimating Densities with Non-Parametric Exponential Families

    We propose a novel approach for density estimation with exponential families for the case in which the true density may not fall within the chosen family. Our approach augments the sufficient statistics with features designed to accumulate probability mass in the neighborhood of the observed points, resulting in a non-parametric model similar to kernel density estimators. We show that under mild conditions, the resulting model uses only the sufficient statistics if the density is within the chosen exponential family, and asymptotically it approximates densities outside of the chosen exponential family. Using the proposed approach, we modify the exponential random graph model, commonly used for modeling distributions over small graphs, to address the well-known issue of model degeneracy. Comment: 22 pages, 5 figures.
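    A crude one-dimensional sketch of the idea (not the paper's construction): augment the Gaussian-family sufficient statistics (x, x^2) with bump features centred at the observed points and fit the natural parameters by unpenalized maximum likelihood, computing the log-partition function numerically on a grid.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def fit_density(x_obs, grid, bandwidth=0.3):
    """Exponential family p(x) ~ exp(theta . T(x)) restricted to a 1-D grid, where
    T(x) stacks (x, x^2) with Gaussian bumps at the observed points."""
    def feats(x):
        base = np.stack([x, x**2], axis=1)
        bumps = np.exp(-0.5 * ((x[:, None] - x_obs[None, :]) / bandwidth) ** 2)
        return np.hstack([base, bumps])

    F_obs, F_grid, dx = feats(x_obs), feats(grid), grid[1] - grid[0]

    def neg_loglik(theta):
        log_z = logsumexp(F_grid @ theta) + np.log(dx)   # numeric log-partition
        return -(F_obs @ theta).mean() + log_z

    theta = minimize(neg_loglik, np.zeros(F_obs.shape[1])).x
    return np.exp(F_grid @ theta - (logsumexp(F_grid @ theta) + np.log(dx)))

rng = np.random.default_rng(0)
x_obs = np.concatenate([rng.normal(-2, 0.5, 30), rng.normal(2, 0.5, 30)])  # bimodal truth
grid = np.linspace(-5, 5, 400)
p_hat = fit_density(x_obs, grid)
print("estimated mode near:", grid[np.argmax(p_hat)])    # should sit near -2 or 2
```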